
7 ways to improve Node.js performance at scale


Idorenyin Obong, software engineer with a flair for writing.

June 10, 2020 · 7 min read


Node.js is a solution for executing JavaScript code outside a browser. The versatility and flexibility of JavaScript on the server side enable you to develop highly performant applications.

In this tutorial, we’ll explore Node’s efficiency and performance and show how it can help you achieve better results with fewer resources. We’ll focus primarily on caching, using a load balancer and WebSockets, and monitoring your application. By the end of this guide, you’ll have the tools and approaches you need to build a Node.js application that performs well at scale.

1. Frontend tooling

Module bundlers and task runners

On the front end, it’s imperative that whatever is shipped to the browser is as small as possible. This especially includes images, JavaScript, and CSS files. The process that makes this possible involves module bundlers (e.g., webpack, Parcel, Rollup) and task runners (e.g., Gulp, Grunt, etc.).

Module bundlers are build tools that process groups of modules and their dependencies into a single file or a small set of files. Under the hood, this all happens using Node.js. The bundled and minified output can then be deployed to production. The minification process varies depending on the tool you use, but for the most part you can rely on the standardized module format introduced in the ES6 revision of JavaScript.

This allows for complex transforms, such as shortening multicharacter variable names, replacing code with a shorter but equivalent syntax, and combining several JavaScript files into one to reduce the number of network requests. The same idea applies to CSS minification: extra whitespace and comments are removed to help the browser parse the stylesheet faster.
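For instance, a minimal production webpack configuration might look like the sketch below (this assumes webpack 5, where production mode enables minification through Terser by default; the entry and output paths are just placeholders):

```js
// webpack.config.js – a minimal production build sketch (assumes webpack 5)
const path = require('path');

module.exports = {
  mode: 'production',            // enables minification (Terser) and other optimizations
  entry: './src/index.js',       // hypothetical entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js', // hashed filenames help with cache busting
    clean: true,                 // empty dist/ before each build
  },
};
```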

CSS modules and preprocessors

When it comes to reducing browser requests during page load, CSS is no different. CSS preprocessors such as PostCSS, Sass, and LESS provide variables, functions, and mixins to simplify the maintenance of CSS code and make refactoring less challenging. Furthermore, they compile all files into a single .css file, which reduces the number of round trips the browser has to make.

With modern tooling that runs on Node.js, such as the aforementioned bundlers, scoped CSS class names can be converted into globally unique names. Loading a CSS module into the local scope of your component is then as simple as requiring or importing it like any other JavaScript module.
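As a quick illustration, here's a sketch of what that looks like, assuming your bundler is already configured to treat *.module.css files as CSS Modules (the file and element names are hypothetical):

```js
// button.js – assumes the bundler handles *.module.css as CSS Modules
import styles from './button.module.css';

// styles.primary resolves to a generated, globally unique class name,
// so this rule can't collide with classes from other components
document.querySelector('#cta').className = styles.primary;
```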

Images

Images are another thing to consider when shipping code to the browser. Generally speaking, the lighter your images, the better. You might want to use compressed images or serve different images, depending on the device. One example that comes to mind is Gatsby, which is powered by Node.js behind the scenes and has a slew of plugins that leverage Node, some of which are specifically designed to transform images at build time into smaller ones and serve them on demand.

2. SSL/TLS and HTTP/2

When building a Node.js application, you can use HTTP/2 to make web browsing faster and easier and minimize bandwidth usage. HTTP/2 focuses on improving performance and solving issues associated with HTTP/1.x.

Features of HTTP/2 include:

Header compression – This removes unnecessary headers and forces all HTTP headers to be sent in compressed format
Multiplexing – This allows multiple requests to retrieve resources and response messages in a single TCP connection simultaneously

The goal of multiplexing is to minimize the number of connections made to the server. The time required to establish an HTTP connection is often more costly than the time required to transmit the data itself. To use HTTP/2, you need to serve your application over Transport Layer Security (TLS)/Secure Sockets Layer (SSL). Node.js's built-in http2 module makes it easy to set up an HTTP/2 server.
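Here's a minimal sketch using Node's built-in http2 module; the certificate paths are placeholders for a key and certificate you'd generate yourself (for local testing, e.g., with OpenSSL):

```js
const http2 = require('http2');
const fs = require('fs');

// Browsers only speak HTTP/2 over TLS, so we create a secure server
const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-privkey.pem'),   // placeholder paths
  cert: fs.readFileSync('localhost-cert.pem'),
});

server.on('stream', (stream, headers) => {
  // Each request arrives as a stream multiplexed over a single TCP connection
  stream.respond({ ':status': 200, 'content-type': 'text/html; charset=utf-8' });
  stream.end('<h1>Hello over HTTP/2</h1>');
});

server.listen(8443);
```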

3. Caching

Caching is a common technique for improving app performance, and it's done on both the client and server side. Client-side caching is the temporary storing of content such as HTML pages, CSS stylesheets, JS scripts, and multimedia files. Client caches help limit data costs by keeping commonly referenced data locally in the browser or on a content delivery network (CDN). The idea is that when a user visits a site and later returns to it, the site shouldn't have to redownload all of its resources.

HTTP makes this possible via cache headers. Cache headers come in two forms.

Expires – specifies the date after which the resource must be requested again
Cache-Control: max-age – specifies the number of seconds for which the resource is valid

Once a resource has been cached with one of these headers, the browser won't re-request it until the expiry has passed. This approach has its drawbacks. For instance, what happens when a resource changes? Somehow the cache has to be broken. You can solve this with cache busting: adding a version number or content hash to the resource URL. When the URL changes, the resource is redownloaded. This is easy to do with Node.js tooling such as webpack.
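As a rough sketch, here's how you might pair long-lived cache headers with hashed filenames using Express's static middleware (the dist directory is a placeholder for your bundler's output):

```js
const express = require('express');
const app = express();

// Bundler output with content hashes in the filenames can be cached aggressively;
// when a file changes, its URL changes, so a stale copy is never reused.
app.use(express.static('dist', {
  maxAge: '1y',      // sets Cache-Control: max-age=31536000
  immutable: true,   // adds the immutable directive for hashed assets
}));

app.listen(3000);
```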

Even with client-side caching enabled, the app server still has to render data for each user accessing the app, so caching also needs to be implemented on the server side. In Node.js, you can use Redis to store temporary data, which is known as object caching. In most cases, you can combine client- and server-side caching to optimize performance.
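Below is a sketch of the cache-aside pattern with the node-redis client (assuming the v4 API; the key format and the database helper are hypothetical):

```js
const { createClient } = require('redis');

const redis = createClient(); // assumes a Redis instance on localhost:6379

// Cache-aside: check Redis first, fall back to the database, then cache the result
async function getProduct(id) {
  const key = `product:${id}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const product = await queryDatabaseForProduct(id); // hypothetical DB call
  await redis.set(key, JSON.stringify(product), { EX: 60 }); // expire after 60 seconds
  return product;
}

// node-redis v4 requires an explicit connect before issuing commands
redis.connect().catch(console.error);
```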

4. Optimizing data handling methods

Optimization is key to performance because it simplifies system processes and boosts overall app efficiency. You might be wondering: what can be optimized in a Node.js application? Start by looking at how data is handled. Node.js programs can be slow because of CPU-bound or I/O-bound operations, such as database queries or slow API calls.

For most Node.js applications, data fetching is done via an API request, and a response is returned. How do you optimize that? One common method is pagination: splitting responses into batches of content that can be browsed via selective requests. You can use pagination to keep individual responses small while still giving the client access to the full data set.

Filtering is another effective approach: letting requesters restrict results by their own criteria. Not only does this reduce the overall number of calls made and the size of the results returned, but it also lets clients specify precisely which resources they need. These two concepts are common in REST API design.
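Here's a sketch of what pagination and filtering might look like on an Express endpoint (the route, query parameters, and data-access helper are hypothetical):

```js
const express = require('express');
const app = express();

// GET /products?page=2&limit=20&category=books
app.get('/products', async (req, res) => {
  const page = Math.max(parseInt(req.query.page, 10) || 1, 1);
  const limit = Math.min(parseInt(req.query.limit, 10) || 20, 100); // cap the page size
  const { category } = req.query;

  // findProducts is a hypothetical data-access helper that applies the
  // filter and returns only the requested slice of results
  const items = await findProducts({
    filter: category ? { category } : {},
    offset: (page - 1) * limit,
    limit,
  });

  res.json({ page, limit, items });
});

app.listen(3000);
```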

Overfetching and underfetching relate to how data is fetched: the former returns more data than the client actually needs, while the latter doesn't return enough, often requiring a separate call to another endpoint to complete the data collection. Both can occur on the client side and can be a result of poor app scaling. GraphQL is useful against this kind of problem because the server doesn't have to guess what the client needs; the client defines its request and gets back exactly what it asked for.

5. Load balancing

Building performant applications that can handle a large number of incoming connections is a common challenge. A common solution is to distribute the traffic to balance the connections. This approach is known as load balancing. Fortunately, Node.js allows you to duplicate an application instance to handle more connections. This can be done on a single multicore server or through multiple servers.

To scale a Node.js app on a multicore server, you can use the built-in cluster module, which spawns new processes called workers (one for each CPU core) that run simultaneously and connect to a single primary (master) process, allowing them to share the same server port. In that way, it behaves like one big, multithreaded Node.js server. You can use the cluster module to enable load balancing and distribute incoming connections across all the workers on an environment's multiple CPU cores according to a round-robin strategy.
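A minimal sketch of the cluster module is shown below (cluster.isPrimary assumes Node 16 or later; older versions use cluster.isMaster):

```js
const cluster = require('node:cluster');
const http = require('node:http');
const os = require('node:os');

if (cluster.isPrimary) {
  // Fork one worker per CPU core; the primary distributes incoming
  // connections across them (round robin on most platforms)
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died, starting a new one`);
    cluster.fork(); // keep the pool at full size
  });
} else {
  // All workers share the same port
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}`);
  }).listen(3000);
}
```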

Another approach is to use the PM2 process manager to keep applications alive forever. This helps avoid downtime by reloading the app whenever there's a code change or error. PM2 comes with a cluster feature that lets you run multiple processes across all cores without having to modify your code to use the native cluster module.
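For example, a PM2 ecosystem file along these lines can run the app in cluster mode across all cores (the app name and script path are hypothetical):

```js
// ecosystem.config.js – start with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'api',             // hypothetical app name
      script: './server.js',   // hypothetical entry point
      instances: 'max',        // one process per available CPU core
      exec_mode: 'cluster',    // use PM2's cluster mode for load balancing
    },
  ],
};
```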

The single-server cluster setup has its drawbacks, and at some point you'll need to move from a single-server architecture to a multiserver one with load balancing via reverse proxying. NGINX supports load balancing across multiple Node.js servers and offers various load balancing methods, including:

Round robin – A new request goes to the next server in a list
Least connections – A new request goes to the server with the fewest active connections
IP hash – A new request goes to the server assigned to a hash of the client's IP address

The reverse proxy feature protects the Node.js server from direct exposure to internet traffic and gives you a great deal of flexibility when using multiple application servers.

6. Secure client-side authentication

Most web apps need to keep the state to give users a personalized experience. If users can sign in to your site, you need to hold sessions for them.


When implementing stateful authentication, you would typically generate a random session identifier and store the session details on the server. To scale a stateful solution to a load-balanced application across multiple servers, you can use a central storage solution such as Redis to store session data, or use the IP hash method (in load balancing) to ensure that a given user always reaches the same web server.
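Here's a sketch of a Redis-backed session store using express-session and connect-redis (this assumes connect-redis v7 and node-redis v4; the session secret is a placeholder):

```js
const express = require('express');
const session = require('express-session');
const { createClient } = require('redis');
const RedisStore = require('connect-redis').default; // connect-redis v7

const redisClient = createClient();
redisClient.connect().catch(console.error);

const app = express();

app.use(session({
  store: new RedisStore({ client: redisClient }), // sessions live in Redis, not in process memory
  secret: process.env.SESSION_SECRET || 'change-me', // placeholder secret
  resave: false,
  saveUninitialized: false,
  cookie: { maxAge: 1000 * 60 * 60 }, // one hour
}));

app.listen(3000);
```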

Such a stateful approach has its drawbacks. For example, limiting users to a specific server can lead to issues when that server needs some sort of maintenance.

Stateless authentication with JWT is another scalable approach, and arguably a better one. The advantage is that data is always available, regardless of which machine is serving the user. A typical JWT implementation involves generating a token when a user logs in. The token is a signed, base64url-encoded JSON object containing the necessary user details. It is sent back to the client and used to authenticate every API request.
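A sketch of that flow with the jsonwebtoken library might look like this (the secret and payload fields are placeholders):

```js
const jwt = require('jsonwebtoken');

const SECRET = process.env.JWT_SECRET || 'change-me'; // placeholder secret

// Issued at login: the payload is signed and base64url-encoded into the token
function issueToken(user) {
  return jwt.sign({ sub: user.id, name: user.name }, SECRET, { expiresIn: '1h' });
}

// Express-style middleware that verifies the token on every API request
function requireAuth(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    req.user = jwt.verify(token, SECRET); // throws if invalid or expired
    next();
  } catch {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}
```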

7. Using WebSockets for effective server communication

The internet has traditionally been developed around the HTTP request/response model. WebSockets are an alternative to HTTP communications in web applications. They provide a long-lived, bidirectional communication channel between the client and the server. Once established, the channel is kept open, offering a very quick and persistent connection between the client and the server. Both parties can start sending data at any time with low latency and overhead.

HTTP is useful for occasional data sharing and client-driven communication that involves user interaction. With WebSockets, the server can send a message to the client without an explicit request, allowing the two to talk to each other simultaneously. This is great for real-time and long-lived communication. ws is a popular library for implementing a WebSockets server in Node.js. On the front end, JavaScript is used to establish a connection to a WebSockets-enabled server and then listen for events. Holding a large number of connections open at the same time calls for a high-concurrency architecture with a low performance cost, which is exactly what WebSockets offers.
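Here's a minimal sketch of a WebSockets server using ws (assuming the v8 API), where the server both pushes a message on connect and broadcasts incoming messages:

```js
const { WebSocketServer, WebSocket } = require('ws'); // assumes ws v8

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  // The server can push messages without waiting for a client request
  socket.send('Welcome!');

  socket.on('message', (data) => {
    // Broadcast incoming messages to every connected client
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```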

Conclusion

In this guide, we reviewed the effect of Node.js on frontend tools, how HTTP/2 enhances Node.js performance, specific caching solutions, and data handling methods you can use to enhance Node.js performance. Then we discussed how to achieve load balancing on a Node.js app to manage more connections, the effect of stateful and stateless client-side authentication on scalability, and, finally, how WebSockets can provide a stable connection between client and server. Now you’ve got everything you need to leverage Node.js performance capabilities and write efficient applications that your users will love.


